RDMA over Converged Ethernet
RDMA over Converged Ethernet (RoCE) is a network protocol that allows remote direct memory access over an Ethernet network. RoCE is a link layer protocol and hence allows communication between any two hosts in the same Ethernet broadcast domain. Although the RoCE protocol benefits from the characteristics of a converged Ethernet network, the protocol can also be used on a traditional or non-converged Ethernet network.[1]
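On the wire, a RoCE frame is an ordinary Ethernet frame whose EtherType (0x8915) identifies the payload as InfiniBand transport headers, which is why RoCE traffic stays within a single broadcast domain. The C sketch below outlines that layout; the struct names are illustrative, the field widths follow the InfiniBand Architecture Specification, and the structs are unpadded simplifications rather than exact wire-format definitions.

```c
#include <stdint.h>

#define ETH_P_ROCE 0x8915      /* EtherType registered for RoCE */

/* Outer Ethernet header: RoCE changes nothing below the link layer. */
struct eth_hdr {
    uint8_t  dst_mac[6];
    uint8_t  src_mac[6];
    uint16_t ethertype;        /* 0x8915 marks a RoCE frame */
};

/* InfiniBand Global Route Header (40 bytes); in RoCE the GIDs play the
 * addressing role that IP addresses would play in a routable protocol. */
struct ib_grh {
    uint32_t version_tclass_flow;
    uint16_t payload_length;
    uint8_t  next_header;
    uint8_t  hop_limit;
    uint8_t  sgid[16];         /* source GID */
    uint8_t  dgid[16];         /* destination GID */
};

/* InfiniBand Base Transport Header (12 bytes). */
struct ib_bth {
    uint8_t  opcode;           /* e.g. send, RDMA write, RDMA read */
    uint8_t  se_m_padcnt_tver;
    uint16_t pkey;
    uint32_t rsvd_dest_qp;     /* 8 reserved bits + 24-bit dest QP number */
    uint32_t ackreq_psn;       /* ack-request bit + 24-bit sequence number */
};

/* Frame layout: eth_hdr | ib_grh | ib_bth | payload | invariant CRC */
```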
Background information
Network-intensive applications like networked storage or cluster computing need a network infrastructure with high bandwidth and low latency. The advantages of RDMA over other network APIs like the Berkeley sockets API are lower latency, lower CPU load and higher bandwidth.[2] The RoCE protocol allows lower latencies than its predecessor, the iWARP protocol.[3] RoCE HCAs (host channel adapters) exist with a latency as low as 1.3 microseconds,[4] while the lowest known iWARP HCA latency as of 2011 is 2 microseconds.[5]
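For comparison with the sockets API, RDMA applications typically use the verbs API instead. The minimal sketch below uses libibverbs, the common user-space verbs library on Linux, to enumerate RDMA-capable devices; a RoCE HCA shows up here like any other RDMA device. It assumes libibverbs is installed (link with -libverbs).

```c
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    /* Returns a NULL-terminated array of available RDMA devices. */
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }
    printf("%d RDMA device(s) found:\n", num);
    for (int i = 0; i < num; i++)
        printf("  %s (node GUID 0x%016llx)\n",
               ibv_get_device_name(list[i]),
               (unsigned long long)ibv_get_device_guid(list[i]));
    ibv_free_device_list(list);
    return 0;
}
```

Note that ibv_get_device_guid() returns the GUID in network byte order; it is printed raw here.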
RoCE versus InfiniBand
RoCE defines how to perform RDMA over Ethernet, while the InfiniBand Architecture Specification defines how to perform RDMA over an InfiniBand network. RoCE is expected to bring InfiniBand applications, which are predominantly cluster-based, onto a common converged Ethernet fabric.[6] Others expect that InfiniBand will keep offering higher bandwidth and lower latency than what is possible with RoCE.[7] While Ethernet is a more familiar technology to most than InfiniBand, the cost of InfiniBand equipment, especially switches, is lower than that of 10 GbE equipment.[8] Another difference between the two technologies is that InfiniBand networks are more energy efficient.[9]
RoCE versus iWARP
While the RoCE specification defines how to perform RDMA over the Ethernet link layer, iWARP is a standard that defines how to perform RDMA over a connection-oriented transport like TCP. That means that unlike RoCE, iWARP is neither bound to Ethernet nor limited to a single Ethernet broadcast domain. However, the memory requirements of many connections along with TCP's flow and reliability controls lead to scalability and performance issues for large-scale HPC and datacenter applications.[10] Also, multicast is defined in the RoCE specification while the current iWARP specification does not define how to perform multicast RDMA.[11][12][13]
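The multicast difference can be made concrete with librdmacm, whose rdma_join_multicast() call is supported over RoCE devices but has no iWARP equivalent. The sketch below is illustrative only: it assumes a RoCE-capable adapter is present, and 239.0.0.1 is an arbitrary example group address.

```c
#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <rdma/rdma_cma.h>

/* Block until one CM event arrives and check that it has the expected type. */
static int expect_event(struct rdma_event_channel *ch,
                        enum rdma_cm_event_type type)
{
    struct rdma_cm_event *ev;
    if (rdma_get_cm_event(ch, &ev))
        return -1;
    int ok = (ev->event == type) ? 0 : -1;
    rdma_ack_cm_event(ev);
    return ok;
}

int main(void)
{
    struct rdma_event_channel *ch = rdma_create_event_channel();
    struct rdma_cm_id *id;
    struct sockaddr_in grp = { .sin_family = AF_INET };

    inet_pton(AF_INET, "239.0.0.1", &grp.sin_addr); /* example group */

    if (!ch || rdma_create_id(ch, &id, NULL, RDMA_PS_UDP)) {
        perror("rdma_create_id");
        return 1;
    }
    /* Bind the id to a local RDMA device by resolving the group address... */
    if (rdma_resolve_addr(id, NULL, (struct sockaddr *)&grp, 2000) ||
        expect_event(ch, RDMA_CM_EVENT_ADDR_RESOLVED)) {
        fprintf(stderr, "address resolution failed\n");
        return 1;
    }
    /* ...then join; completion is reported asynchronously on the channel. */
    if (rdma_join_multicast(id, (struct sockaddr *)&grp, NULL) ||
        expect_event(ch, RDMA_CM_EVENT_MULTICAST_JOIN)) {
        fprintf(stderr, "multicast join failed\n");
        return 1;
    }
    printf("joined multicast group\n");

    rdma_leave_multicast(id, (struct sockaddr *)&grp);
    rdma_destroy_id(id);
    rdma_destroy_event_channel(ch);
    return 0;
}
```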
Criticism
Some aspects that should have been defined in the RoCE specification have been left out. These are:
- How to translate between primary RoCE GIDs and Ethernet MAC addresses (see the sketch after this list).[14]
- How to translate between secondary RoCE GIDs and Ethernet MAC addresses. It is not clear whether it is possible to implement secondary GIDs in the RoCE protocol without adding a RoCE-specific address resolution protocol.
- How to implement VLANs for the RoCE protocol. Current implementations store the VLAN ID in the twelfth and thirteenth byte of the sixteen-byte GID, although the RoCE specification does not mention VLANs at all.[15]
- How to translate between RoCE multicast GIDs and Ethernet MAC addresses. Current implementations use the same address mapping that has been specified for mapping IPv6 multicast addresses to Ethernet MAC addresses,[16][17] as sketched below. This is dangerous, though: on a network where MLD snooping has been enabled in the Ethernet switches, if a RoCE multicast GID and an IPv6 multicast address map to the same Ethernet MAC address, MLD snooping may cause the RoCE traffic not to be sent out over all the switch ports it should reach.[18]
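The address mappings discussed above are small enough to show concretely. The C sketch below reflects what current implementations reportedly do per the cited sources, not anything mandated by the RoCE specification: unicast GIDs are derived from the MAC address the way IPv6 stateless autoconfiguration builds a link-local address (EUI-64 with the universal/local bit flipped), and multicast GIDs are mapped to MAC addresses per the RFC 2464 scheme.[16][17]

```c
#include <stdint.h>
#include <string.h>

/* Unicast: derive a link-local-style GID (fe80::/64 prefix) from a 48-bit
 * MAC address by inserting 0xff,0xfe (EUI-64) and flipping the
 * universal/local bit, as IPv6 stateless autoconfiguration does.
 * VLAN encoding in the GID (see above) is not shown. */
static void mac_to_gid(const uint8_t mac[6], uint8_t gid[16])
{
    memset(gid, 0, 16);
    gid[0] = 0xfe;             /* fe80::/64 link-local prefix */
    gid[1] = 0x80;
    gid[8]  = mac[0] ^ 0x02;   /* flip universal/local bit */
    gid[9]  = mac[1];
    gid[10] = mac[2];
    gid[11] = 0xff;            /* EUI-64 filler */
    gid[12] = 0xfe;
    gid[13] = mac[3];
    gid[14] = mac[4];
    gid[15] = mac[5];
}

/* Multicast: map a multicast GID to an Ethernet MAC the same way RFC 2464
 * maps IPv6 multicast addresses: 33:33 followed by the GID's last 4 bytes.
 * This shared mapping is exactly what makes MLD snooping problematic. */
static void mcast_gid_to_mac(const uint8_t gid[16], uint8_t mac[6])
{
    mac[0] = 0x33;
    mac[1] = 0x33;
    memcpy(&mac[2], &gid[12], 4);
}
```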
References
- ^ InfiniBand Trade Association, InfiniBand™ Architecture Specification Release 1.2.1 Annex A16: RoCE, InfiniBand Trade Association, April 2010.
- ^ Cameron, Don; Regnier, Greg, Virtual Interface Architecture, Intel Press, 2002, ISBN 978-0971288706.
- ^ Feldman, Michael, RoCE: An Ethernet-InfiniBand Love Story, HPC wire, April 2010.
- ^ End-to-End Lowest Latency Ethernet Solution for Financial Services, Mellanox, March 2011.
- ^ Chelsio Announces 4th Generation Terminator Chip, Chelsio, March 2010.
- ^ Rick Merritt, New converged network blends Ethernet, InfiniBand, EE Times, April 2010.
- ^ Sean Michael Kerner, InfiniBand Moving to Ethernet?, Enterprise Networking Planet, April 2010.
- ^ David Gross, Will New QDR InfiniBand Leap Ahead of 40 Gigabit Ethernet?, Seeking Alpha, January 2009.
- ^ Dennis Abts, Energy Proportional Datacenter Networks, 37th International Symposium on Computer Architecture (ISCA), 2010.
- ^ Rashti, Mohammad, iWARP Redefined: Scalable Connectionless Communication over High-Speed Ethernet, International Conference on High Performance Computing (HiPC), 2010.
- ^ H. Shah et al. (October 2007). "Direct Data Placement over Reliable Transports". RFC 5041.
- ^ C. Bestler et al. (October 2007). "Stream Control Transmission Protocol (SCTP) Direct Data Placement (DDP) Adaptation". RFC 5043.
- ^ P. Culley et al. (October 2007). "Marker PDU Aligned Framing for TCP Specification". RFC 5044.
- ^ Dreier, Roland, Two notes on IBoE, Roland Dreier's blog, December 2010.
- ^ Eli Cohen, IB/core: Add VLAN support for IBoE, Linux kernel patch, August 2010.
- ^ Eli Cohen, RDMA/cm: Add RDMA CM support for IBoE devices, Linux kernel patch, October 2010.
- ^ M. Crawford, RFC 2464 - Transmission of IPv6 Packets over Ethernet Networks, IETF, 1998.
- ^ Bart Van Assche, Software RoCE driver source code comments, linux-rdma mailing list, August 2011.